Three-way screening method of basic clustering for ensemble clustering
XU Jianfeng, ZOU Weikang, LIANG Wei, CHENG Gaojie, ZHANG Yuanjian
Journal of Computer Applications    2019, 39 (11): 3120-3126.   DOI: 10.11772/j.issn.1001-9081.2019050864
At present, research on ensemble clustering mainly focuses on optimizing the ensemble strategy, while measuring and optimizing the quality of the basic clusterings is rarely studied. On the basis of information entropy theory, a quality measurement index for basic clusterings was proposed, and a three-way screening method for basic clusterings was constructed based on three-way decision. Firstly, α and β were set as the thresholds of the three-way decision for basic clustering screening. Secondly, the average cluster quality of each basic clustering was calculated and used as its quality measurement index. Finally, the three-way decision was implemented. For one round of three-way screening, the decision strategy is: 1) delete a basic clustering if its quality measurement index is less than the threshold β; 2) keep a basic clustering if its quality measurement index is greater than or equal to the threshold α; 3) recalculate the quality of a basic clustering if its quality measurement index is greater than or equal to β and less than α. For the third option, the decision process continues until no basic clustering is deleted or the maximum number of iterations is reached. The comparative experiments show that the three-way screening method of basic clusterings can effectively improve ensemble clustering results.
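To make the screening loop concrete, a minimal Python sketch is given below. It only illustrates the three-way decision described in the abstract: the callable quality_fn (the entropy-based quality index), the iteration cap max_iter, and the re-evaluation of the whole pool after each deletion round are assumptions for illustration, not details taken from the paper.

```python
def three_way_screening(clusterings, quality_fn, alpha, beta, max_iter=10):
    """Three-way screening of basic clusterings (minimal sketch).

    clusterings: list of label assignments, one per basic clustering.
    quality_fn:  user-supplied callable returning an entropy-based quality
                 index for one clustering, given the current pool.
    alpha, beta: acceptance / rejection thresholds (alpha > beta).
    """
    pool = list(clusterings)
    for _ in range(max_iter):
        qualities = [quality_fn(c, pool) for c in pool]
        keep, boundary = [], []
        for c, q in zip(pool, qualities):
            if q >= alpha:        # accept: keep the basic clustering
                keep.append(c)
            elif q < beta:        # reject: delete the basic clustering
                continue
            else:                 # boundary region: defer the decision
                boundary.append(c)
        new_pool = keep + boundary
        if len(new_pool) == len(pool):   # nothing deleted: stop iterating
            break
        pool = new_pool                  # re-evaluate after deletions
    return pool
```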
Imperialist competitive algorithm based on multiple search strategy for solving traveling salesman problem
CHEN Menghui, LIU Junlin, XU Jianfeng, LI Xiangjun
Journal of Computer Applications    2019, 39 (10): 2992-2996.   DOI: 10.11772/j.issn.1001-9081.2019030434
The imperialist competitive algorithm is a swarm intelligence optimization algorithm with strong local search ability, but excessive local search leads to a loss of diversity and falling into local optima. Aiming at this problem, an Imperialist Competitive Algorithm based on Multiple Search Strategy (MSSICA) was proposed. A country was defined as a feasible solution, and the empires were defined as four combinatorial artificial chromosome mechanisms with different characteristics. The block mechanism was used to retain dominant solution fragments during the search, and differentiated combinatorial artificial chromosome mechanisms were used by different empires to search the feasible solution information of different solution spaces. When the search falls into a local optimum, the multiple search strategy injects a uniformly distributed feasible solution to replace a less advantageous one and restore diversity. Experimental results show that the multiple search strategy can effectively improve the diversity of the imperialist competitive algorithm and improve the quality and stability of the solutions.
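The diversity-injection step of the multiple search strategy can be pictured with the short Python sketch below; the stagnation counter, the patience threshold, and the choice of replacing the worst country are illustrative assumptions rather than details drawn from the paper.

```python
import random

def random_tour(n_cities):
    """Generate a uniformly distributed feasible TSP solution (random tour)."""
    tour = list(range(n_cities))
    random.shuffle(tour)
    return tour

def inject_diversity(countries, costs, cost_fn, stagnation, patience=20):
    """Replace the weakest country with a fresh random tour when the best
    cost has not improved for `patience` consecutive iterations."""
    if stagnation >= patience:
        worst = max(range(len(countries)), key=lambda i: costs[i])
        countries[worst] = random_tour(len(countries[worst]))
        costs[worst] = cost_fn(countries[worst])
        stagnation = 0                   # reset after injecting diversity
    return countries, costs, stagnation
```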
Short-term lightning prediction based on multi-machine learning competitive strategy
SUN LiHua, YAN Junfeng, XU Jianfeng
Journal of Computer Applications    2016, 36 (9): 2555-2559.   DOI: 10.11772/j.issn.1001-9081.2016.09.2555
Traditional lightning forecasting methods often use a single optimal machine learning algorithm, without considering the spatial and temporal variations of meteorological data. To address this problem, an ensemble learning based multi-machine-learning model was proposed. Firstly, attribute reduction was performed on the meteorological data to reduce its dimensionality; secondly, multiple heterogeneous machine learning classifiers were trained on the data set and the optimal base classifiers were screened according to their predictive quality; finally, the final classifier was generated by weighted combination of the optimal base classifiers using an ensemble strategy. The experimental results show that, compared with the traditional single optimal algorithm, the prediction accuracy of the proposed model is increased by 9.5% on average.
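The competitive workflow of training heterogeneous classifiers, screening them by predictive quality, and combining them with weights could look roughly like the scikit-learn sketch below; the particular classifiers, the accuracy threshold, and the soft-voting weights are assumptions for illustration only, and attribute reduction is assumed to have already been applied to X.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def build_competitive_ensemble(X, y, threshold=0.7):
    """Train heterogeneous base classifiers, screen them by cross-validated
    quality, and weight the survivors in a soft-voting ensemble (sketch)."""
    candidates = {
        "lr": LogisticRegression(max_iter=1000),
        "svm": SVC(probability=True),
        "rf": RandomForestClassifier(n_estimators=200),
    }
    # Screen base classifiers by cross-validated predictive quality.
    scores = {name: cross_val_score(clf, X, y, cv=5).mean()
              for name, clf in candidates.items()}
    selected = [(name, candidates[name])
                for name, s in scores.items() if s >= threshold]
    weights = [scores[name] for name, _ in selected]
    # Weighted soft voting over the screened base classifiers.
    ensemble = VotingClassifier(estimators=selected, voting="soft",
                                weights=weights)
    return ensemble.fit(X, y)
```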
Temporal similarity algorithm of coarse-granularity based dynamic time warping
CHEN Mingwei, SUN Lihua, XU Jianfeng
Journal of Computer Applications    2016, 36 (6): 1639-1644.   DOI: 10.11772/j.issn.1001-9081.2016.06.1639
The Dynamic Time Warping (DTW) algorithm cannot maintain high classification accuracy while improving computation speed. To solve this problem, a Coarse-Granularity based Dynamic Time Warping (CG-DTW) algorithm was proposed based on the idea of naive granular computing. First, suitable temporal granules were obtained by computing temporal variance features, and the original series were replaced by these granular features. Then, the relatively optimal corresponding temporal granule was obtained by executing DTW while dynamically adjusting the inter-granular elasticity of the compared granules. Finally, the DTW distance was calculated under the corresponding optimal granularity. During this process, a lower-bound early-termination strategy was introduced to further improve the efficiency of CG-DTW. The experimental results show that the proposed algorithm runs about 21.4% faster than the classical algorithm and is about 32.3 percentage points more accurate than the dimension-reduction algorithm. Especially for the classification of long time series, CG-DTW achieves both high computing speed and good classification accuracy. In practical applications, CG-DTW can adapt to the classification of long time series of uncertain length.
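A rough Python sketch of the coarse-granularity idea with early abandoning is shown below; the fixed granule width, the per-granule variance feature, and the row-minimum abandoning rule are simplifying assumptions, and the paper's dynamic adjustment of inter-granular elasticity is not reproduced here.

```python
import numpy as np

def coarse_granules(series, width):
    """Replace a series by per-granule variance features (illustrative choice)."""
    n = len(series) // width
    return np.array([np.var(series[i * width:(i + 1) * width])
                     for i in range(n)])

def dtw_distance(a, b, abandon_at=np.inf):
    """DTW on two feature sequences, returning the cumulative squared cost;
    abandons early once a whole row of the cost matrix exceeds `abandon_at`."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        if D[i, 1:].min() > abandon_at:   # every path already too costly
            return np.inf
    return D[n, m]

def cg_dtw(x, y, width=8, best_so_far=np.inf):
    """CG-DTW sketch: compare coarse-granularity features instead of raw points."""
    return dtw_distance(coarse_granules(x, width),
                        coarse_granules(y, width),
                        abandon_at=best_so_far)
```

In a nearest-neighbor classifier, best_so_far would be the smallest distance found so far, so candidate series whose coarse-granularity cost already exceeds it are abandoned early.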